# Knowledge Reasoning
## GLM 4 32B 0414 EXL3
Apache-2.0 · Large Language Model · owentruong · 36 · 2

GLM-4-32B-0414 is a large-scale language model developed by the THUDM team. Based on the GLM architecture, it is suited to a wide range of text generation tasks.
## Kanana Nano 2.1b Instruct
Large Language Model · Transformers · Multilingual · kakaocorp · 5,994 · 59

Kanana is a bilingual (Korean/English) language model series developed by Kakao. This 2.1B-parameter version outperforms comparably sized models on Korean tasks while keeping computational costs low.
## E.star.7.b
Apache-2.0 · Large Language Model · Transformers · English · liminerity · 86 · 2

A 7B-parameter large language model based on the Mistral architecture, trained efficiently with the Unsloth and TRL libraries and showing strong results across multiple benchmarks.
## Misted 7B
Apache-2.0 · Large Language Model · Transformers · English · Walmart-the-bag · 386 · 8

Misted-7B is a 7B-parameter large language model created by merging OpenHermes-2-Mistral-7B and Mistral-7B-SlimOrca, designed primarily for text generation tasks.
## Orca Mini 13b
Large Language Model · Transformers · English · pankajmathur · 79 · 100

orca_mini_13b is a text generation model trained on several high-quality datasets, with a focus on instruction following and dialogue tasks.
## My QA
Large Language Model · Transformers · cgt · 25 · 0

A question-answering model fine-tuned from hfl/chinese-pert-large, suited to Chinese Q&A tasks.